Topology-Aware I/O Caching for Shared Storage Systems
Authors
Abstract
The main contribution of this paper is a topology-aware storage caching scheme for parallel architectures. In a parallel system with multiple storage caches, these caches form a shared cache space, and effective management of this space is a critical issue. Of particular interest is data migration (i.e., moving data from one storage cache to another at runtime), which may help reduce the distance between a data block and its customers. As the data access and sharing patterns change during execution, we can migrate data in the shared cache space to reduce access latencies. The proposed storage caching approach, which is based on the two-dimensional post-office placement model, takes advantage of the variances across the access latencies of the different storage caches (from a given node's perspective) by selecting the most appropriate location (cache) at which to place a data block shared by multiple nodes. This paper also presents experimental results from our implementation of this data migration-based scheme. The results reveal that the improvements brought by our proposed scheme in average hit latency, average miss rate, and average data access latency are 29.1%, 7.0%, and 32.7%, respectively, over an alternative storage caching scheme.
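To make the post-office placement idea concrete, the following is a minimal sketch (not the paper's actual algorithm): given the 2D mesh coordinates of the nodes sharing a block and their access frequencies, pick the candidate cache whose location minimizes the total frequency-weighted Manhattan distance to the sharers. The function name, the use of Manhattan distance, and the example data are illustrative assumptions.

```python
# Hypothetical sketch of topology-aware block placement: choose the cache
# that minimizes the sum of frequency-weighted Manhattan distances to the
# nodes sharing the block (a discrete 2D post-office / 1-median problem).

def best_cache_location(sharers, candidate_caches):
    """sharers: list of ((x, y), access_count); candidate_caches: list of (x, y)."""
    def cost(cache):
        cx, cy = cache
        return sum(w * (abs(cx - x) + abs(cy - y)) for (x, y), w in sharers)
    return min(candidate_caches, key=cost)

# Example: two frequent readers near (0, 0) and one infrequent reader at (3, 3).
sharers = [((0, 0), 10), ((1, 0), 8), ((3, 3), 2)]
caches = [(0, 0), (2, 2), (3, 3)]
print(best_cache_location(sharers, caches))  # -> (0, 0)
```

As sharing patterns change at runtime, re-evaluating this cost and migrating the block to the new minimizer is what lets migration track the "customers" of each block.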
Similar Papers
Data Caching in Networks with Reading, Writing and Storage Costs
Caching can significantly improve the efficiency of information access in networks by reducing the access latency and bandwidth/energy usage. However, caching in too many nodes can take up too much memory, incur extensive caching-related traffic, and hence, may even result in performance degradation. In this article, we address the problem of caching data items in networks with the objective of...
A Novel Framework Lime Lighting Dedup Over Caches in an Tacit Dope Center
Flash memory-based caches inside VM hypervisors can reduce I/O latencies and offload much of the I/O traffic from network-attached storage systems deployed in virtualized data centers. This paper explores the effectiveness of content deduplication in these large (typically 100s of GB) host-side caches. Previous deduplication studies focused on data mostly at rest in backup and archive ap...
A Case for Buffer Servers
Faster networks and cheaper storage have brought us to a point where I/O caching servers have an important role in the design of scalable, high-performance file systems. These intermediary I/O servers — or buffer servers — can be deployed at strategic points in the network, interposed between clients and data sources such as standard file servers, Internet data servers, and tertiary storage. Th...
Storage-Aware Caching: Revisiting Caching for Heterogeneous Storage Systems
Modern storage environments are composed of a variety of devices with different performance characteristics. In this paper, we explore storage-aware caching algorithms, in which the file buffer replacement algorithm explicitly accounts for differences in performance across devices. We introduce a new family of storage-aware caching algorithms that partition the cache, with one partition per devi...
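The per-device partitioning idea above can be sketched as follows. This is an illustrative assumption about how such a partitioner might weight devices (miss latency times miss traffic), not the cited paper's exact algorithm; the names and numbers are invented for the example.

```python
# Hypothetical storage-aware partitioner: split a fixed budget of cache
# pages across devices in proportion to the cost of missing on each device,
# estimated here as miss_latency * miss_rate.

def partition_cache(total_pages, devices):
    """devices: dict name -> (miss_latency_ms, misses_per_sec)."""
    weights = {d: lat * rate for d, (lat, rate) in devices.items()}
    total = sum(weights.values())
    return {d: int(total_pages * w / total) for d, w in weights.items()}

# A fast SSD and a slow disk: the slow device gets the larger partition,
# since its misses are far more expensive.
devices = {"ssd": (0.1, 500), "disk": (8.0, 100)}
print(partition_cache(1000, devices))
```

The point of the weighting is that an eviction policy oblivious to device speed wastes cache on blocks that are cheap to re-fetch; partitioning by miss cost shields the slow device.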
Optimizing Hierarchical Storage Management For Database System
Caching is a classical but effective way to improve system performance. To this end, servers, such as database servers and storage servers, contain significant amounts of memory that act as a fast cache. Meanwhile, as new storage devices such as flash-based solid state drives (SSDs) are added to storage systems over time, using the memory cache is not the only way to improve s...